This tutorial demonstrates how to perform randomized benchmarking (RB) using pygsti. While RB is a very different protocol from Gate Set Tomography (GST), pygsti includes basic support for RB because of its prevalence in the community, its simplicity, and its considerable use of GST-related concepts and data structures. The core protocol is standard Clifford randomized benchmarking, as defined in "Scalable and Robust Benchmarking of Quantum Processes". Much of the notation is consistent with Wallman and Flammia's "Randomized benchmarking with confidence".
This tutorial will show the following, all in the context of benchmarking a single qubit:
- how to generate random RB sequences, as lists of GateString objects, and
- how to analyze a DataSet filled with RB sequence data.

We'll begin by importing relevant modules:
from __future__ import print_function #python 2 & 3 compatibility
import pygsti
from pygsti.extras import rb
from pygsti.construction import std1Q_XYI
First, let's choose a "target" gateset. This is the set of physically-implemented, or "primitive" gates. For this tutorial, we'll just use the standard $I$, $X(\pi/2)$, $Y(\pi/2)$ set. The target gateset should generate the Clifford group (or some other unitary 2-design).
gs_target = std1Q_XYI.gs_target
print("Primitive gates = ", gs_target.gates.keys())
Primitive gates = [u'Gi', u'Gx', u'Gy']
To generate appropriately random RB sequences, we'll need to know how the set of all Clifford gates maps onto the given primitive set (since RB requires random sequences of Cliffords, not of primitive gates). PyGSTi already contains the group of 1-qubit Cliffords. Benchmarking a different group, or the $n>1$ qubit Clifford group, requires the user to define that group.
PyGSTi contains a standard compilation of each 1-qubit Clifford into the gates $\{I,X(\pi/2),Y(\pi/2)\}$, which we will use here.
clifford_to_primitive = std1Q_XYI.clifford_compilation
# get the 1Q Clifford group: the canonical set of superoperator matrices representing the Clifford group, used later.
clifford_group = rb.std1Q.clifford_group
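Conceptually, a compilation like `clifford_compilation` is just a map from each Clifford-group element to a short sequence of primitive gates. The following toy sketch uses hypothetical labels and a plain dictionary, not pygsti's actual data structure, purely to illustrate the idea:

```python
# Toy sketch (hypothetical labels, NOT pygsti's actual compilation object):
# a Clifford compilation maps each Clifford-group element to primitives.
toy_compilation = {
    'Gc0':  ('Gi',),        # identity Clifford -> idle gate
    'Gc16': ('Gx',),        # a Clifford equal to X(pi/2)
    'Gc21': ('Gy', 'Gy'),   # a Clifford compiled as two Y(pi/2) pulses
}

# Expanding a random Clifford sequence into primitive gates:
clifford_seq = ['Gc16', 'Gc21', 'Gc0']
primitive_seq = [g for c in clifford_seq for g in toy_compilation[c]]
```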
Now let's decide what random Clifford sequences to generate. We use $m$ to denote the length of a Clifford sequence, counted in Clifford gates and not including the inversion Clifford at the end of each sequence. $K_m$ denotes the number of different random sequences of length $m$ to use. Note: $K_m$ need not be $m$-independent; it can also be given as a dictionary with $(m, K_m)$ key-value pairs.
m_list = [1,101,201,301,401,501,601,701,801,901,1001]
K_m = 10
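As noted above, the number of sequences per length can be $m$-dependent. A sketch of such a schedule as a dictionary of $(m, K_m)$ pairs (the specific counts here are hypothetical, chosen just to illustrate taking more sequences at short lengths):

```python
m_list = [1, 101, 201, 301, 401, 501, 601, 701, 801, 901, 1001]

# Hypothetical m-dependent schedule: more random sequences at short
# lengths, fewer at long lengths.
K_m_sched = {m: (20 if m <= 201 else 10) for m in m_list}
```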
Now we generate the list of random RB Clifford sequences to run. The write_empty_rb_files function handles this job, and does a lot. Here's what this one function call does:
- It generates $K_m$ random Clifford sequences for each length $m$ in m_list.
- It translates these Clifford sequences into primitive-gate sequences using the supplied alias maps (the dictionary passed as alias_maps).
- An empty DataSet is saved in text format using the RB sequences expressed in terms of Clifford gates, ready to be filled in with experimental counts.
- Text files listing the sequences, expressed in terms of both Cliffords and primitives, are also written.

filename_base = 'tutorial_files/rb_template'
rb_sequences = rb.write_empty_rb_files(filename_base, m_list, K_m, clifford_group,
{'primitive': clifford_to_primitive},
seed=0)
There is now an empty template file tutorial_files/rb_template.txt. For actual physical experiments, this file should be filled with experimental data and read in using pygsti.io.load_dataset. In this tutorial, we will generate fake data instead and just use the resulting dataset object.
The files tutorial_files/rb_template_clifford.txt and tutorial_files/rb_template_primitive.txt are text files listing all the RB sequences, expressed in terms of Cliffords and primitives respectively.
To generate a dataset, we first need to make a gateset. Here we assume a gate set that is perfect except for some small amount of depolarizing noise on each primitive gate.
depol_strength = 1e-3
gs_experimental = std1Q_XYI.gs_target
gs_experimental = gs_experimental.depolarize(gate_noise=depol_strength)
Now we choose the number of clicks per experiment and simulate our data. More information on simulating RB can be found in the following tutorial.
all_rb_sequences = [] #construct an aggregate list of Clifford sequences
for seqs_for_single_cliff_len in rb_sequences:
all_rb_sequences.extend(seqs_for_single_cliff_len)
N=100 # number of samples
rb_data = pygsti.construction.generate_fake_data(
gs_experimental,all_rb_sequences,N,'binomial',seed=1,
aliasDict=clifford_to_primitive, collisionAction="keepseparate")
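Statistically, what this fake-data generation does can be sketched without pygsti: under depolarizing noise, survival probabilities decay roughly as $A + B\,f^m$, and each sequence is sampled with binomial shot noise. This toy model is only an illustration of that idea, not pygsti's implementation:

```python
import numpy as np

# Toy illustration (NOT pygsti's implementation): survival probability
# decays as A + B*f**m; each sequence gets N binomially-sampled clicks.
rng = np.random.default_rng(1)
A, B, f, N = 0.5, 0.5, 0.996, 100
m_vals = [1, 101, 201, 301, 401]
success_counts = {m: rng.binomial(N, A + B * f**m) for m in m_vals}
```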
Now that we have data, it's time to perform the RB analysis. The function do_randomized_benchmarking returns an RBResults object, which holds all the relevant input and output RB quantities. This object can also be used to generate error bars on the computed RB quantities. Some important arguments are fit (which fitting function to use), success_outcomelabel (the outcome label counted as a "success"), and dim (the dimension of the Hilbert space, 2 for a single qubit).
rb_results = rb.do_randomized_benchmarking(rb_data, all_rb_sequences,fit='first order',success_outcomelabel='0', dim=2)
Okay, so we've done RB! Now let's examine how we can use the returned RBResults object to visualize and inspect the results. First let's plot the averaged RB data (i.e., averaged over sequences at each length) and the decay curve that has been fit to the data.
Some useful optional arguments are xlim, ylim, save_fig_path, and loc, which have their standard matplotlib meanings, as well as legend (True or False) and title (True or False).
#Create a workspace to show plots
w = pygsti.report.Workspace()
w.init_notebook_mode(connected=False, autodisplay=True)
w.RandomizedBenchmarkingPlot(rb_results)
<pygsti.report.workspaceplots.RandomizedBenchmarkingPlot at 0x10b076a50>
Let's look at the RB fit results. The parameters are defined as follows, following the Wallman and Flammia article cited above:
- A, B, f are fit parameters to the standard RB fitting function $P_m = A + B\,f^m$, where $P_m$ is the average "survival probability" for sequences of length $m$. (The first-order fit used here adds a parameter C, fitting $P_m = A + (B + Cm)\,f^m$.)
- r $= (1-f)(d-1)/d$ is the "RB number", where $d$ is the dimension (here $d=2$).

rb_results.print_results()
RB results

- Fitting to the first order fitting function: A + (B+Cm)*f^m.

A = 0.5158046259207254
B = 0.48752902145336097
C = -8.702270289706322e-09
f = 0.996307105067199
r = 0.0018464474664005026
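These fitted values can be sanity-checked by hand with the first-order fitting function and the definition of the RB number (values copied from the printed output; $d=2$ for a single qubit):

```python
# First-order RB fitting function P_m = A + (B + C*m) * f**m, and the
# RB number r = (1 - f)(d - 1)/d, using the fitted values printed above.
def p_m(m, A, B, C, f):
    return A + (B + C * m) * f**m

A, B, C = 0.5158046259207254, 0.48752902145336097, -8.702270289706322e-09
f = 0.996307105067199
d = 2
r = (1 - f) * (d - 1) / d   # the RB number
```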
Lastly, let's put some error bars on the estimates. There are two methods for computing the error bars: analytic error bars using the method of Wallman and Flammia in "Randomized benchmarking with confidence", or bootstrapped error bars. Error bars here are 1-sigma confidence intervals. The Wallman and Flammia method requires a particular $K_m$ schedule, and cannot be used with a constant $K_m$, as here. So, we compute bootstrapped error bars:
rb_results.compute_bootstrap_error_bars(seed=0)
Generating non-parametric dataset. ... Bootstrapped error bars computed. Use print methods to access.
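The idea behind non-parametric bootstrapped error bars can be sketched generically: resample the data with replacement, recompute the estimate on each resample, and take the standard deviation across resamples as the 1-sigma error bar. This sketch illustrates the statistical idea only, not pygsti's internals, and uses made-up data:

```python
import numpy as np

# Generic non-parametric bootstrap sketch (NOT pygsti's internals):
# resample with replacement, re-estimate, take std across resamples.
rng = np.random.default_rng(0)
data = rng.binomial(100, 0.95, size=50) / 100.0   # hypothetical survival fractions
estimates = [np.mean(rng.choice(data, size=data.size, replace=True))
             for _ in range(200)]
error_bar = np.std(estimates)   # 1-sigma bootstrap error bar on the mean
```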
Now that we've generated (bootstrapped) error bars, we can print them using the same print methods as before:
rb_results.print_results()
RB results

- Fitting to the first order fitting function: A + (B+Cm)*f^m.
- Boostrapped-derived error bars (1 sigma).

A = 0.5158046259207254 +/- 0.0
B = 0.48752902145336097 +/- 5.579080615598709e-17
C = -8.702270289706322e-09 +/- 0.0
f = 0.996307105067199 +/- 1.1158161231197418e-16
r = 0.0018464474664005026 +/- 5.579080615598709e-17
We can also manually extract the error bars, and other results parameters. For example:
print(rb_results.results['r'])
print(rb_results.results['r_error_BS'])
0.0018464474664005026
5.579080615598709e-17